Simulations Find Answers Blowing in the Wind

The Energy Exchange explores the complex and critical intersection of energy, money, and technology. Experts use their insights and forecasts to outline what energy is available to us, the costs of producing and using it, and the technological innovations changing how we harness Earth’s resources to power our way of life.

 

As energy demand shifts away from fossil fuels, the world will turn to other sources, wind energy among them. Energy Exchange host David Hidinger came across Joachim Toftegaard Hansen’s work after Hansen published several papers on the wind farms of the future.

On this episode of Energy Exchange, Hidinger talked with Hansen, a Fluid Mechanics Engineer at Aerotak and Master’s Student at the Technical University of Denmark, about his research and writing on wind farms.

Hansen’s undergraduate thesis focused on wind turbines. He studied at Oxford Brookes University, whose advanced research computing facility was central to his work. With a long-standing interest in structural mechanics, he set out to understand wind turbines. While on holiday back in Denmark, he began reaching out to wind turbine companies about working on a project. None of them took on projects with undergraduate students, but he found some information on YouTube about an American professor.

That professor was Caltech’s John Dabiri, known for his unique vertical-axis wind turbines. Inspired, Hansen wondered: what if you ran a CFD (Computational Fluid Dynamics) analysis on these turbines? Dabiri had run many tests, but Hansen believed the future of these turbines lay in CFD. He pitched the idea, and his thesis was approved.

CFD is “a numerical way to solve these famous Navier-Stokes equations,” Hansen said. These partial differential equations govern fluid flow, and solving them for realistic geometries is among the hardest problems in fluid dynamics. This is why the computers at Brookes were so central to Hansen’s work: he had to run thousands of mathematical simulations on supercomputers.
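For readers curious what those equations look like, the incompressible Navier-Stokes equations that CFD solvers discretize can be written in a standard form (this is the general textbook statement, not the specific setup Hansen used):

```latex
% Momentum balance and incompressibility constraint
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p
  + \nu \,\nabla^{2} \mathbf{u}
  + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0
```

Here \(\mathbf{u}\) is the fluid velocity field, \(p\) the pressure, \(\rho\) the density, \(\nu\) the kinematic viscosity, and \(\mathbf{f}\) any body forces. A CFD solver approximates these equations on a mesh of millions of cells, which is what drives the enormous compute requirements Hansen describes.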

“If I had to do those [the simulations] on a normal computer, it would take about 40 years,” Hansen said. The results took time, but not as long.

Listen to learn more about the fascinating future of wind turbines and hear Hansen’s results.

Follow us on social media for the latest updates in B2B!

Twitter – @MarketScale
Facebook – facebook.com/marketscale
LinkedIn – linkedin.com/company/marketscale

 
